TECHnalysis Research Blog

May 20, 2025
Dell Showcases Silicon Diversity in AI Server and PC

By Bob O'Donnell

Choice is a beautiful thing. That’s particularly true when you’re a company that’s building products intended to meet the very diverse needs of a wide range of customers. So, it’s not surprising to see Dell Technologies take that approach when it comes to a new set of hardware offerings that the company is debuting at its Dell Technologies World event.

The company announced products that incorporate AI accelerator chips from AMD, Intel, Nvidia, and Qualcomm across its lines of servers and PCs. Given that AI chips now offer one of the broadest ranges of choices of any segment of the semiconductor market, the move makes sense. Still, it’s an impressive set of offerings, and it highlights how many options have become available over the last few years.

What’s also interesting about Dell’s approach is that it reflects the increasing momentum and growing sophistication of the offerings hitting the server and PC markets. After years of negative growth, enterprise servers in particular are seeing a renaissance of interest. Companies are starting to recognize the value of running their own AI workloads and building their own AI-capable data centers to do it. As a result, traditional server vendors, including Dell and its competitors, are enjoying renewed demand.

Of course, it doesn’t hurt that Nvidia has started pushing the concept of enterprise AI factories as well. (And to Dell’s credit, it was the first to talk about the idea of AI factories, via its Project Helix joint effort with Nvidia two years ago.) Recognizing this trend, Nvidia is building products and software stacks that are optimized for companies to do this kind of work.

The reasons for this are actually straightforward. According to numerous sources, most companies still have a majority of their data behind their corporate firewall. Even more importantly, the data they’ve yet to move to the cloud is often their most precious data—which also happens to be the most useful data to train and fine-tune AI models. As a result, there’s a great deal of logic in leveraging this data for running AI workloads within their own environments. It’s a classic case of data gravity, in which companies want to run the workloads where the data lives.
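The data-gravity calculus described above can be sketched as a simple routing decision. This is purely an illustration: the dataset names, egress price, and thresholds below are hypothetical, and this is not any Dell, Nvidia, or cloud-vendor API.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    location: str   # "on_prem" (behind the firewall) or "cloud"
    size_gb: int

def choose_runtime(dataset: Dataset, egress_cost_per_gb: float = 0.09) -> str:
    """Pick where an AI workload should run, following the data.

    Moving large, sensitive on-prem datasets to the cloud incurs transfer
    cost and risk, so past an (illustrative) threshold the workload runs
    in a local AI factory instead of moving the data out.
    """
    if dataset.location == "on_prem":
        # Estimated cost of moving the data out from behind the firewall.
        move_cost = dataset.size_gb * egress_cost_per_gb
        if move_cost > 100 or dataset.size_gb > 1000:
            return "on_prem_ai_factory"
    return "public_cloud"

# A large proprietary dataset stays put; the workload comes to it.
crm_records = Dataset("crm_records", location="on_prem", size_gb=5000)
print(choose_runtime(crm_records))  # -> on_prem_ai_factory
```

In a real hybrid deployment this decision would of course weigh compliance, latency, and capacity as well, but the core logic is the same: the workload follows the data.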

It’s not that enterprises aren’t continuing their investments in the cloud, but there’s increasing recognition that the two types of computing can happily co-exist. In fact, thanks to new technology standards like MCP (Model Context Protocol), distributed hybrid AI applications that leverage both public and private clouds will likely move to the mainstream in a very rapid fashion.

With that context in mind, it wasn’t surprising to see Dell continue to expand its joint AI factory offerings with Nvidia. Not only is Dell offering new AI Factory configurations of its PowerEdge XE9780 and XE9785 line of servers with Nvidia’s Blackwell Ultra chips—both liquid cooled and air-cooled—the company is also among the first to support Nvidia’s new RTX Pro architecture, which was just introduced over the weekend at Computex in Taiwan. The Dell PowerEdge XE7745 server combines traditional x86 CPUs along with Nvidia’s RTX PRO 6000 Blackwell Server Edition GPUs in an air-cooled design, making it significantly easier for many enterprises to upgrade their existing data centers. The idea is that these new servers can run traditional server workloads while also opening up the option for running certain AI workloads. These systems don’t have the high-end processing power of the most advanced Blackwell systems designed for cloud-based environments, but they have more than enough to handle many of the AI workloads that businesses will want to run within their own environments.

In addition to Nvidia-based offerings, Dell also announced a new range of Dell AI Factory PowerEdge XE9785 servers using AMD’s Instinct MI350 GPUs. Thanks to AMD’s newly upgraded ROCm software stack, these systems are seen as a viable and even slightly better performing alternative to some of the Nvidia offerings, particularly when it comes to power consumption. Equally important, they give enterprises a choice to select from instead of being locked into a single vendor.

Along similar lines, Dell also announced one of the first mainstream implementations of Intel’s Gaudi 3 AI accelerators: a configuration of its PowerEdge XE9680 servers with eight Gaudi 3 accelerator cards. As with the AMD offerings, this gives companies another alternative to select from and provides a more cost-effective solution. The Intel-based systems are particularly well suited for organizations that want to leverage Intel’s software stack and the range of models from Hugging Face and other sources that have been specifically optimized for the Gaudi chips.

One of the most intriguing Dell Technologies World announcements is actually on the PC side: the launch of the new Dell Pro Max Plus portable workstation. The Dell Pro Max Plus includes the first implementation of a discrete NPU in a mobile PC, the Qualcomm AI 100. By leveraging the interface typically used for discrete GPUs, Dell was able to bring this powerful new accelerator into an existing design. The AI 100 PC Inference Card features two discrete chips with a total of 32 AI acceleration cores and 64 GB of dedicated memory. The company is targeting the device at organizations that want to run customized inferencing applications at the edge, as well as at AI model developers who want to leverage the Qualcomm NPU design (though it’s important to note that it’s a different NPU architecture than the one found in the Snapdragon X series of Arm-based SoCs). Because of its large onboard memory, the AI 100 card allows models with over 100B parameters to run directly on the device, significantly larger than what even today’s most powerful Copilot+ PCs can handle.

In addition to these hardware compute advancements, Dell announced several important new software-based data access capabilities for its various Dell AI Factory server platforms via what it’s calling the Dell AI Data Platform. One of the biggest challenges in working with large AI models is getting speedy access to data and loading it into memory. Dell’s new Project Lightning addresses this with a parallel file system that Dell claims offers twice the performance of any similar system. Dell also made enhancements to the Dell Data Lakehouse, a data architecture that many AI-based applications use to tap into large datasets.

All told, Dell has put together what looks to be a solid set of new AI-focused offerings that give enterprises a broad range of alternatives from which to choose. Given the rapid move to AI-powered applications that the company highlighted during its opening keynote, the combination of options it’s bringing to market should let even the most specific demands of a given enterprise be met in a targeted manner.

Here’s a link to the original column: https://www.linkedin.com/pulse/dell-showcases-silicon-diversity-ai-server-pc-bob-o-donnell-yartf

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.